Why Not Just
15 pages tagged "Why Not Just"
Can we list the ways a task could go disastrously wrong and tell an AI to avoid them?
Could we tell the AI to do what's morally right?
Why can't we just turn the AI off if it starts to misbehave?
Can you give an AI a goal which involves "minimally impacting the world"?
Can we test an AI to make sure it won't misbehave if it becomes superintelligent?
Can we constrain a goal-directed AI using specified rules?
Would it help to cut off the top few percent of a quantilizer's distribution?
Why don't we just not build AGI if it's so dangerous?
Aren't there easy solutions to AI alignment?
Why can't we just "put the AI in a box" so that it can't influence the outside world?
Why can't we just use Asimov's Three Laws of Robotics?
Why can't we just make a "child AI" and raise it?
Wouldn't humans triumph over a rogue AI because there are more of us?
Can't we limit damage from AI systems in the same ways we limit damage from companies?
But won't we just design AI to be helpful?